4 research outputs found

    Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media

    Full text link
    We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two different languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is completed with Task 5 on check-worthiness estimation in political debates and speeches. A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the task setup, the evaluation results, and a summary of the approaches used by the participants, and we discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification.
    Comment: Check-Worthiness Estimation, Fact-Checking, Veracity, Evidence-based Verification, Detecting Previously Fact-Checked Claims, Social Media Verification, Computational Journalism, COVID-19
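
    As an illustration of the kind of system most teams built, below is a minimal sketch of a BERT-based check-worthiness ranker for Task 1. It is not any participant's actual system; the model choice and the rank-by-probability setup are assumptions for illustration, and the classification head here is untrained, so the scores are placeholders.

        # Minimal sketch of a BERT-based check-worthiness ranker (illustrative only,
        # not any team's actual system). Assumes a binary "check-worthy vs. not"
        # fine-tuning setup; the head below is untrained, so scores are placeholders.
        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        MODEL = "bert-base-multilingual-cased"  # assumption: a multilingual BERT covering English/Arabic
        tokenizer = AutoTokenizer.from_pretrained(MODEL)
        model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
        model.eval()

        tweets = [
            "The new law passed yesterday with 300 votes.",
            "Good morning everyone, lovely weather today!",
        ]
        batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**batch).logits
        # Check-worthiness is typically scored as a ranking problem, so tweets are
        # ordered by the predicted probability of being check-worthy.
        scores = torch.softmax(logits, dim=-1)[:, 1]
        for i in scores.argsort(descending=True).tolist():
            print(f"{scores[i].item():.3f}  {tweets[i]}")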

    BIGIR at CLEF 2019: Automatic verification of Arabic claims over the web

    No full text
    With the proliferation of fake news and its prevalent impact on democracy, journalism, and public opinion, manual fact-checking cannot scale to the volume and speed of fake news propagation. Automatic fact-checkers are therefore needed to counter the negative impact of fake news in a fast and effective way. In this paper, we present our participation in Task 2 of the CLEF-2019 CheckThat! Lab, which addresses the problem of finding evidence over the Web for verifying Arabic claims. We participated in all four subtasks and adopted a machine learning approach in each, with a different set of features extracted from both the claim and the corresponding retrieved Web search result pages. Our models for the different subtasks, trained solely on the provided training data, exhibited relatively good performance. Our official results on the test data show that our best-performing runs achieved the best overall performance in subtasks A and B among 7 and 8 participating runs, respectively. As for subtasks C and D, our best-performing runs achieved the median overall performance among 6 and 9 participating runs, respectively.
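
    To make the feature-based setup concrete, here is an illustrative sketch of combining simple claim/page overlap features with a linear classifier. The features and the toy data are assumptions for illustration; the actual BIGIR systems used different, richer feature sets per subtask.

        # Illustrative sketch only: hypothetical lexical-overlap features between a
        # claim and a retrieved Web page, fed to a linear classifier.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def overlap_features(claim: str, page_text: str) -> list:
            c, p = set(claim.lower().split()), set(page_text.lower().split())
            inter = len(c & p)
            return [
                inter / max(len(c), 1),   # fraction of claim terms covered by the page
                inter / max(len(p), 1),   # density of claim terms within the page
                len(page_text.split()),   # raw page length
            ]

        # Toy training pairs: (claim, page text), with 1 = page useful as evidence.
        train_pairs = [
            ("vaccine approved for emergency use", "health agency approves vaccine for emergency use"),
            ("vaccine approved for emergency use", "recipe for a traditional lamb stew"),
        ]
        y_train = [1, 0]
        X = np.array([overlap_features(c, p) for c, p in train_pairs])
        clf = LogisticRegression().fit(X, y_train)
        print(clf.predict_proba(X)[:, 1])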

    AraFacts: The First Large Arabic Dataset of Naturally-Occurring Professionally-Verified Claims

    No full text
    We introduce AraFacts, the first large Arabic dataset of naturally-occurring claims collected from 5 Arabic fact-checking websites, e.g., Fatabyyano and Misbar, covering claims since 2016. Our dataset consists of 6,222 claims along with their factual labels and additional metadata, such as fact-checking article content, topical category, and links to posts or Web pages spreading the claim. Since the data is obtained from various fact-checking websites, we standardize the original claim labels to provide a unified label rating for all claims. Moreover, we provide revealing dataset statistics and motivate its use by suggesting possible research applications. The dataset is made publicly available for the research community.
    This work was made possible by NPRP grant No.: NPRP11S-1204-170060 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
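
    The label-standardization step described above lends itself to a short illustration. The mapping below is hypothetical (the actual AraFacts label taxonomy may differ); it only shows the shape of the normalization.

        # Hypothetical example of unifying site-specific verdicts into one label
        # scheme, as the dataset description mentions; actual AraFacts labels may differ.
        LABEL_MAP = {
            "fake": "False",
            "misleading": "Partly-False",
            "true": "True",
        }

        def unify(raw_label: str) -> str:
            """Map a fact-checking site's raw verdict to a unified rating."""
            return LABEL_MAP.get(raw_label.lower().strip(), "Other")

        print(unify("Misleading"))  # -> Partly-False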

    Heat transfer—a review of 2002 literature

    No full text